How to Securely Compute the Modulo-Two Sum of Binary Sources
In secure multiparty computation, mutually distrusting users in a network
want to collaborate to compute functions of data which is distributed among the
users. The users should not learn any additional information about the data of
others than what they may infer from their own data and the functions they are
computing. Previous works have mostly considered the worst case context (i.e.,
without assuming any distribution for the data); Lee and Abbe (2014) is a
notable exception. Here, we study the average case (i.e., we work with a
distribution on the data), where correctness and privacy are only required
asymptotically.
For concreteness and simplicity, we consider a secure version of the function
computation problem of K\"orner and Marton (1979) where two users observe a
doubly symmetric binary source with parameter p and the third user wants to
compute the XOR. We show that the amount of communication and randomness
resources required depends on the level of correctness desired. When zero-error
and perfect privacy are required, the results of Data et al. (2014) show that
these can be achieved if and only if a total rate of 1 bit is communicated
between every pair of users and private randomness is used at a rate of 1 bit. In
contrast, we show here that, if we only want the probability of error to vanish
asymptotically in block length, a strictly lower rate (the binary entropy of p)
suffices for all the links and for the private randomness; this scheme also
guarantees perfect privacy. We also show that no smaller rates are possible
even if privacy is only required asymptotically.
Comment: 6 pages, 1 figure, extended version of submission to IEEE Information Theory Workshop, 201
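To illustrate the rate comparison numerically (a minimal sketch, not taken from the paper: the function name and the sample values of p are ours), the binary entropy h(p) is strictly below 1 bit whenever p differs from 1/2, which is exactly the gap between the asymptotic per-link rate shown here and the 1-bit zero-error rate of Data et al. (2014):

```python
import math

def binary_entropy(p: float) -> float:
    """Binary entropy h(p) = -p log2(p) - (1-p) log2(1-p), in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

# Per-link rate in the asymptotic regime vs. the 1-bit zero-error rate.
for p in (0.05, 0.11, 0.25, 0.5):
    print(f"p = {p:.2f}: h(p) = {binary_entropy(p):.4f} bits (zero-error: 1 bit)")
```

For p = 1/2 the two rates coincide, since h(1/2) = 1; for any other parameter the asymptotic regime is strictly cheaper.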
QuPeD: Quantized Personalization via Distillation with Applications to Federated Learning
Traditionally, federated learning (FL) aims to train a single global model
while collaboratively using multiple clients and a server. Two natural
challenges that FL algorithms face are heterogeneity in data across clients and
collaboration of clients with {\em diverse resources}. In this work, we
introduce a \textit{quantized} and \textit{personalized} FL algorithm QuPeD
that facilitates collective (personalized model compression) training via
\textit{knowledge distillation} (KD) among clients who have access to
heterogeneous data and resources. For personalization, we allow clients to
learn \textit{compressed personalized models} with different quantization
parameters and model dimensions/structures. Towards this, first we propose an
algorithm for learning quantized models through a relaxed optimization problem,
where quantization values are also optimized over. When each client
participating in the (federated) learning process has different requirements
for the compressed model (both in model dimension and precision), we formulate
a compressed personalization framework by introducing knowledge distillation
loss for local client objectives collaborating through a global model. We
develop an alternating proximal gradient update for solving this compressed
personalization problem, and analyze its convergence properties. Numerically,
we validate that QuPeD outperforms competing personalized FL methods, FedAvg,
and local training of clients in various heterogeneous settings.Comment: Appeared in NeurIPS2021. arXiv admin note: text overlap with
arXiv:2102.1178
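The two ingredients described above, alternating updates in which the quantization values themselves are optimized, and a knowledge-distillation term in the local objective, can be sketched as follows. This is a simplified illustration under our own assumptions (plain NumPy, a k-means-style refit of the levels, and a standard temperature-softened KL distillation term), not the paper's actual algorithm:

```python
import numpy as np

def quantize(w, levels):
    """Project each weight to its nearest quantization level (proximal step on weights)."""
    idx = np.argmin(np.abs(w[:, None] - levels[None, :]), axis=1)
    return levels[idx], idx

def update_levels(w, idx, num_levels):
    """Alternating step: refit each quantization value as the mean of its assigned weights."""
    return np.array([w[idx == k].mean() if np.any(idx == k) else 0.0
                     for k in range(num_levels)])

def kd_loss(student_logits, teacher_logits, T=2.0):
    """Distillation term: KL divergence between temperature-softened
    teacher and student output distributions."""
    def softmax(z):
        z = z - z.max(axis=-1, keepdims=True)
        e = np.exp(z)
        return e / e.sum(axis=-1, keepdims=True)
    p_t = softmax(teacher_logits / T)
    p_s = softmax(student_logits / T)
    return float(np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12))))

# Alternate between projecting weights and refitting the quantization values.
rng = np.random.default_rng(0)
w = rng.normal(size=100)          # stand-in for one client's model weights
levels = np.array([-1.0, 0.0, 1.0])  # learnable quantization values
for _ in range(5):
    wq, idx = quantize(w, levels)
    levels = update_levels(w, idx, len(levels))
```

Because the levels are refit rather than fixed, each client can pick its own number of levels (precision) and model size, which is what allows the personalized models to be heterogeneous.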
Towards Characterizing Securely Computable Two-Party Randomized Functions
A basic question of cryptographic complexity is to combinatorially
characterize all randomized functions which have information-theoretic
semi-honest secure 2-party computation protocols. The corresponding question
for deterministic functions was answered almost three decades back, by
Kushilevitz (FOCS 1989). In this work, we make progress towards
understanding securely computable 'randomized' functions. We bring
tools developed in the study of completeness to bear on this problem. In
particular, our characterizations are obtained by considering only symmetric
functions with a combinatorial property called 'simplicity'
(Maji et al. Indocrypt 2012).
Our main result is a complete combinatorial characterization of
randomized functions with 'ternary output' kernels, that have
information-theoretic semi-honest secure 2-party computation protocols. In
particular, we show that there exist simple randomized functions with
ternary output that do not have secure computation protocols. (For
deterministic functions, the smallest output alphabet size of such a
function is 5, due to an example given by Beaver, DIMACS Workshop on Distributed Computing and Cryptography 1989.)
Also, we give a complete combinatorial characterization of randomized
functions that have '2-round' information-theoretic semi-honest secure
2-party computation protocols.
We also give a counter-example to a natural conjecture for the full
characterization, namely, that all securely computable simple functions have secure
protocols with a unique transcript for each output value. This conjecture
is in fact true for deterministic functions and, as our results above
show, for ternary functions and for functions with 2-round secure
protocols.